Self-regularization optimization methods for Non-IID data in federated learning
Mengjie LAN, Jianping CAI, Lan SUN
Journal of Computer Applications    2023, 43 (7): 2073-2081.   DOI: 10.11772/j.issn.1001-9081.2022071122

Federated Learning (FL) is a new distributed machine learning paradigm that breaks down data barriers while protecting data privacy, enabling clients to collaboratively train a machine learning model without sharing local data. However, handling Non-Independent and Identically Distributed (Non-IID) data from different clients remains a major challenge in FL. Existing solutions to this problem do not exploit the implicit relationship between the local and global models to address it simply and efficiently. To address the Non-IID issue across clients in FL, two novel FL optimization algorithms, Federated Self-Regularization (FedSR) and Dynamic Federated Self-Regularization (Dyn-FedSR), were proposed. In FedSR, a self-regularization penalty term was introduced in each training round to modify the local loss function dynamically; by building a relationship between the local and global models, the local model was kept closer to the global model, which aggregates rich knowledge, thereby alleviating the client drift caused by Non-IID data. In Dyn-FedSR, the coefficient of the self-regularization term was determined dynamically from the similarity between the local and global models. Extensive experiments on different tasks demonstrate that FedSR and Dyn-FedSR significantly outperform state-of-the-art FL algorithms such as the Federated Averaging (FedAvg) algorithm, the Federated Proximal (FedProx) optimization algorithm, and the Stochastic Controlled Averaging algorithm (SCAFFOLD) in various scenarios, achieving efficient communication and high accuracy as well as robustness to imbalanced data and uncertain local updates.
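The abstract does not give the exact formulas, but the two ideas can be sketched concretely. The sketch below assumes a FedProx-style quadratic penalty for the self-regularization term and cosine similarity as the model-similarity measure; both choices, and all function names, are illustrative assumptions rather than the paper's actual definitions.

```python
import math

def self_regularized_loss(local_loss, w_local, w_global, mu):
    """FedSR-style local objective: the base local loss plus a
    self-regularization penalty pulling the local model toward the
    current global model (a FedProx-style quadratic term is assumed;
    the paper's exact penalty may differ)."""
    penalty = 0.5 * mu * sum((a - b) ** 2 for a, b in zip(w_local, w_global))
    return local_loss + penalty

def dynamic_coefficient(w_local, w_global, mu_max=1.0, eps=1e-12):
    """Dyn-FedSR-style idea: set the penalty coefficient from the
    similarity between local and global models, here measured with
    cosine similarity (an assumed choice of measure)."""
    dot = sum(a * b for a, b in zip(w_local, w_global))
    norm = (math.sqrt(sum(a * a for a in w_local))
            * math.sqrt(sum(b * b for b in w_global)))
    cos = dot / (norm + eps)
    # Lower similarity (more client drift) -> stronger regularization.
    return mu_max * (1.0 - cos)
```

In a training round, each client would compute `mu = dynamic_coefficient(w_local, w_global)` and then minimize `self_regularized_loss` instead of the plain local loss, so that clients whose models have drifted further from the aggregate are pulled back more strongly.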
